
Yuhao Dong

Kimi K2.5: Visual Agentic Intelligence

Feb 02, 2026

The RoboSense Challenge: Sense Anything, Navigate Anywhere, Adapt Across Platforms

Jan 08, 2026

Visual Grounding from Event Cameras

Sep 11, 2025

Talk2Event: Grounded Understanding of Dynamic Scenes from Event Cameras

Jul 23, 2025

ShotBench: Expert-Level Cinematic Understanding in Vision-Language Models

Jun 26, 2025

Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning

Jun 16, 2025

EgoLife: Towards Egocentric Life Assistant

Mar 05, 2025

Ola: Pushing the Frontiers of Omni-Modal Language Model with Progressive Modality Alignment

Feb 06, 2025

Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives

Jan 07, 2025

Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models

Nov 21, 2024